∛n–consistent estimators: improved convergence rates and rate–adaptive inference
Authors
Abstract
We propose a classical (non-Bayesian) Laplace estimator as an alternative for a large class of ∛n–consistent estimators, including isotonic density and regression estimators, inverse density and regression estimators, the maximum score and mode regression estimators, and interval censoring and monotone hazard rate estimators. The proposed alternative provides a unified method of smoothing that applies to all of the examples mentioned above; easier computation is a byproduct in the maximum score case. Depending on the choice of an input parameter and the degree of smoothness of a population function, the convergence rate of our estimator can be faster than ∛n and its limit distribution can be normal. With extreme smoothness, a rate close to √n is achievable. We provide a bias reduction method and an inference procedure that automatically adapts to the correct convergence rate and limit distribution.
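The abstract leaves the construction implicit, but a classical Laplace estimator of this kind is, in essence, a quasi-posterior mean: the discontinuous sample objective is exponentiated, and the estimator is the resulting weighted average of parameter values rather than the argmax. Below is a minimal Python sketch for Manski's maximum score objective, the case where the abstract notes easier computation as a byproduct. The exp(n^{2/3} · Q_n) temperature, the flat prior on a bounded grid, and the simulated design are illustrative assumptions, not the paper's exact construction or choice of input parameter.

import numpy as np

rng = np.random.default_rng(0)

# Simulate a binary choice model: y = 1{b0*x0 + x1 + u >= 0}, median(u | x) = 0.
n = 2000
x0 = np.ones(n)                               # intercept regressor
x1 = rng.normal(size=n)                       # regressor with coefficient normalized to 1
u = rng.standard_cauchy(size=n)               # median-zero errors
b0_true = 0.5
y = (b0_true * x0 + x1 + u >= 0).astype(float)

def max_score_objective(b0):
    # Manski's maximum score sample objective for beta = (b0, 1).
    return np.mean((2.0 * y - 1.0) * np.sign(b0 * x0 + x1))

# Laplace smoothing of the step-function objective: exponentiate and average.
# The n**(2/3) scaling is an illustrative temperature, not the paper's input parameter.
grid = np.linspace(-2.0, 3.0, 2001)           # flat prior supported on a bounded grid
q = np.array([max_score_objective(b) for b in grid])
w = np.exp(n ** (2.0 / 3.0) * (q - q.max()))  # subtract the max for numerical stability
b_laplace = np.sum(grid * w) / np.sum(w)      # quasi-posterior mean
b_argmax = grid[np.argmax(q)]                 # raw maximum score estimate, for comparison
print(f"Laplace estimate: {b_laplace:.3f}  argmax estimate: {b_argmax:.3f}")

Because the weighted average varies smoothly in the data while the argmax jumps between grid points, this averaging illustrates one sense in which a Laplace construction can smooth, and potentially speed up, a ∛n-rate estimator.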
Similar resources
Adaptive Markov Chain Monte Carlo Confidence Intervals
In Adaptive Markov Chain Monte Carlo (AMCMC) simulation, classical estimators of asymptotic variances are inconsistent in general. In this work we establish that despite this inconsistency, confidence interval procedures based on these estimators remain consistent. We study two classes of confidence intervals, one based on the standard Gaussian limit theory, and the class of so-called fixed-b c...
Almost Sure Convergence Rates for the Estimation of a Covariance Operator for Negatively Associated Samples
Let {X_n, n ≥ 1} be a strictly stationary sequence of negatively associated random variables, with common continuous and bounded distribution function F. In this paper, we consider the estimation of the two-dimensional distribution function of (X_1, X_{k+1}) based on histogram-type estimators, as well as the estimation of the covariance function of the limit empirical process induced by the se...
Oracle Inequalities and Adaptive Rates
We have previously seen how sieve estimators give rise to rates of convergence to the Bayes risk by performing empirical risk minimization over H_{k(n)}, where (H_k)_{k≥1} is an increasing sequence of sets of classifiers and k(n) → ∞. However, the rate of convergence depends on k(n). Usually this rate is chosen to minimize the worst-case rate over all distributions of interest. However, it would be...
Adaptive estimation of hazard functions
In this paper we obtain convergence rates for sieved maximum likelihood estimators of the log-hazard function in a censoring model. We also establish convergence results for an adaptive version of the estimator based on the method of structural risk minimization. We discuss applications to tensor-product spline estimators as well as to neural net and radial basis function sieves. We obtain si...
Uniformly Root-n Consistent Density Estimators for Weakly Dependent Invertible Linear Processes
Convergence rates of kernel density estimators for stationary time series are well studied. For invertible linear processes, we construct a new density estimator that converges, in the supremum norm, at the better, parametric rate n^{-1/2}. Our estimator is a convolution of two different residual-based kernel estimators. We obtain in particular convergence rates for such residual-based kernel estimat...